Apple has poached dozens of artificial intelligence experts from Google and created a "secretive European laboratory" in Zurich to house a new team of staff tasked with building new AI models and products, according to a paywalled Financial Times report.
Based on an analysis of LinkedIn profiles conducted by FT, Apple has recruited at least 36 specialists from Google since 2018, when it poached John Giannandrea to be its top AI executive.
Apple's main AI team works out of California and Seattle, but the company has recently expanded its offices dedicated to AI work in Zurich, Switzerland. Apple's acquisitions of local AI startups FaceShift (VR) and Fashwell (image recognition) are believed to have influenced its decision to build a secretive research lab known as "Vision Lab" in the city.
According to the report, employees based in the lab have been involved in Apple's research into the underlying technology that powers OpenAI's ChatGPT chatbot and similar products based on large language models (LLMs). The focus has been on designing more advanced AI models that incorporate text and visual inputs to produce responses to queries.
The report suggests that Apple's recent work on LLMs is a natural outgrowth of the company's work on Siri over the last decade:
The company has long been aware of the potential of "neural networks" — a form of AI inspired by the way neurons interact in the human brain and a technology that underpins breakthrough products such as ChatGPT.
Chuck Wooters, an expert in conversational AI and LLMs who joined Apple in December 2013 and worked on Siri for almost two years, said: "During the time that I was there, one of the pushes that was happening in the Siri group was to move to a neural architecture for speech recognition. Even back then, before large language models took off, they were huge advocates of neural networks."
Currently, Apple's leading AI group includes notable ex-Google personnel such as Giannandrea, former head of Google Brain, which is now part of DeepMind. Samy Bengio, now senior director of AI and ML research at Apple, was also previously a leading AI scientist at Google. The same goes for Ruoming Pang, who directs Apple's "Foundation Models" team focusing on large language models. Pang previously headed AI speech recognition research at Google.
In 2016, Apple acquired Perceptual Machines, a company that worked on generative AI-powered image detection, founded by Ruslan Salakhutdinov from Carnegie Mellon University. Salakhutdinov is said to be a key figure in the history of neural networks, and studied at the University of Toronto under the "godfather" of the technology, Geoffrey Hinton, who left Google last year citing concerns about the dangers of generative AI.
Salakhutdinov told FT that one reason for Apple's slow AI rollout was the tendency of language models to provide incorrect or problematic answers: "I think they are just being a little bit more cautious because they can't release something they can't fully control," he said.
iOS 18 is rumored to include new generative AI features for Siri, Spotlight, Shortcuts, Apple Music, Messages, Health, Keynote, Numbers, Pages, and other apps. These features are expected to be powered by Apple's on-device LLM, although Apple is also said to have discussed partnerships with Google, OpenAI, and Baidu.
A first look at the AI features that Apple has planned should come in just over a month, with iOS 18 set to debut at the Worldwide Developers Conference that kicks off on June 10.
Top Rated Comments
Also, if Apple is behind, why did it include a Neural Engine specifically for "AI" in iPhones starting in 2017 (https://github.com/hollance/neural-engine/blob/master/docs/supported-devices.md)? This was used initially for tasks like Face ID, but Apple didn't stop there. Apple has been putting a lot of effort into improving its Neural Engine (https://machinelearning.apple.com/research/neural-engine-transformers). Apple brought the Neural Engine to the Mac with the M1, where it is used for many "AI" tasks. It is also really good at running machine learning workloads, including LLMs. I can run local LLMs faster on my M1 Pro than I can on my 5900X with an RTX 3070. That comes down to several factors, including the Neural Engine. That it can hold its own against or beat CUDA on a decent GPU is impressive.
Apple doesn't just throw things against the wall to see what sticks. There's a longer plan to what Apple does. We are also in the very early days of LLMs. I love them as much as the next guy (I have a ChatGPT subscription and use it nearly every day for many tasks), but they are not sustainably scalable. Something has to improve in their efficiency. The publicly available models from OpenAI, Meta, Alphabet, and other companies are also not private: the companies can all look at what you put in to improve their models. Addressing these issues is part of what Apple is working on.